Mask-Net: Learning Context Aware Invariant Features Using Adversarial Forgetting (Student Abstract)

Authors

Abstract

Training a robust system, e.g., Speech to Text (STT), requires large datasets. Variability present in the dataset, such as unwanted nuances and biases, is the reason such large datasets are needed to learn general representations. In this work, we propose a novel approach to induce invariance using adversarial forgetting (AF). Our initial experiments on learning accent-invariant features for the STT task achieve better generalization in terms of word error rate (WER) compared to traditional models. We observe an absolute improvement of 2.2% and 1.3% on the out-of-distribution and in-distribution test sets, respectively.
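The abstract describes adversarial forgetting only at a high level. The sketch below illustrates one common way such a setup is wired: an encoder whose output is passed through a learned soft mask, a task head for transcription, and an accent classifier attached through a gradient-reversal layer so that the encoder is pushed to discard accent information. All module names, dimensions, and the weighting factor lambda_adv are illustrative assumptions, not the authors' exact Mask-Net architecture; the same gradient-reversal idea also underlies the speaker-invariant training paper listed below.

# Minimal sketch of adversarial forgetting for accent-invariant STT features.
# Hypothetical architecture for illustration only (not the published Mask-Net).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates and scales gradients on the way back."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class AdversarialForgettingSketch(nn.Module):
    def __init__(self, feat_dim=80, hidden=256, n_tokens=32, n_accents=5, lambda_adv=0.5):
        super().__init__()
        self.lambda_adv = lambda_adv
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        # Forget gate: a soft mask in [0, 1] applied to the encoded features.
        self.mask = nn.Sequential(nn.Linear(hidden, hidden), nn.Sigmoid())
        self.stt_head = nn.Linear(hidden, n_tokens)      # e.g. per-frame CTC logits
        self.accent_head = nn.Linear(hidden, n_accents)  # adversary

    def forward(self, x):
        h, _ = self.encoder(x)            # (batch, time, hidden)
        h = h * self.mask(h)              # masked ("forgotten") representation
        token_logits = self.stt_head(h)
        # The adversary tries to predict accent from the masked features; the
        # reversed gradient pushes the encoder and mask to remove accent cues.
        pooled = GradReverse.apply(h.mean(dim=1), self.lambda_adv)
        accent_logits = self.accent_head(pooled)
        return token_logits, accent_logits

In training, one would combine a transcription loss (e.g. CTC) on token_logits with a standard cross-entropy accent loss on accent_logits; the gradient-reversal layer flips the sign of the accent gradient inside the encoder, so the shared features become less predictive of accent while remaining useful for transcription.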


Similar Resources

Context-Aware Generative Adversarial Privacy

Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve...

Full text

Context incorporation using context-aware language features

This paper investigates the problem of context incorporation into human language systems and particular in Sentiment Analysis (SA) systems. So far, the analysis of how different features, when incorporated into such systems, improve their performance, has been discussed in a number of studies. However, a complete picture of their effectiveness remains unexplored. With this work, we attempt to e...

Full text

Fast Learning of Sprites using Invariant Features

A popular framework for the interpretation of image sequences is the layers or sprite model of e.g. Wang and Adelson (1994), Irani et al. (1994). Jojic and Frey (2001) provide a generative probabilistic model framework for this task, but their algorithm is slow as it needs to search over discretized transformations (e.g. translations, or affines) for each layer. In this paper we show that by us...

Full text

Speaker-Invariant Training via Adversarial Learning

We propose a novel adversarial multi-task learning scheme, aiming at actively curtailing the inter-talker feature variability while maximizing its senone discriminability so as to enhance the performance of a deep neural network (DNN) based ASR system. We call the scheme speaker-invariant training (SIT). In SIT, a DNN acoustic model and a speaker classifier network are jointly optimized to mini...

Full text

Context-Aware Mobile Learning

Recent developments on mobile devices and networks enable new opportunities for mobile learning anywhere, anytime. Furthermore, recent advances on adaptive learning establish the foundations for personalized learning adapted to the characteristics of each individual learner. A mobile learner would perform an educational activity using the infrastructure (e.g. handheld devices, networks) in an e...

Full text


Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i13.27047